Hinges and crises
The second post in the sequence covers a group of motivations related to crises.
We often talk about the hinge of history, a period of high influence over the whole future trajectory of life. If we grant that our century is such a hinge, it’s unlikely that the “hinginess” is distributed uniformly across the century; instead, it seems much more likely it will be concentrated in particular decades, years, and months, which will have much larger influence. It also seems likely that some of these “hingy” periods will look eventful and be understood as crises at the time. So understanding crises, and the ability to act during crises, may be particularly important for influencing the long-term future.
The first post in this sequence mentioned my main reason to work on COVID: it let me test my models of the world, and so informed my longtermist work. This post presents some other reasons, related to the above argument about hinges. None of these reasons would have been sufficient for me personally on their own, but they still carry weight, and should be sufficient for others in the next crisis.[1]
An exemplar crisis with a timescale of months
COVID has commonalities with some existential risk scenarios. (See Krakovna.) Lessons from it could transfer to risks in which:
the crisis unfolds over a similar timescale (weeks or years, rather than seconds or hours),
governments have some role,
the risk is at least partially visible,
the general population is engaged in some way.
This makes COVID a more useful comparison for versions of continuous AI takeoff where governments are struggling to understand an unfolding situation, but in which they have options to act and/or regulate. Similarly, it is a useful model for versions of any x-risk where a large fraction of academia suddenly focuses on a topic previously studied by a small group, and resources spent on the topic increase by many orders of magnitude. This emergency research push is likely in scenarios with a warning shot or sufficiently loud fire alarm that gets noticed by academia.
On the other hand, lessons learned from COVID will be correspondingly less useful for cases where few of the above assumptions hold (e.g. “an AI in a box bursts out in an intelligence explosion on the timescale of hours”).
Crisis and opportunity
Crises often bring opportunities to change the established order, and, for example, policy options that were outside the Overton window can suddenly become real. (This was noted pre-COVID by Anders Sandberg.) There can also be rapid developments in relevant disciplines and technologies.
Some examples of Overton shifts during COVID include: total border closures (in the West), large-scale and prolonged stay-at-home orders, mask mandates, unconditional payouts to large fractions of the population, and automatic data-driven control policies.
Technology developments include the familiar new vaccine platforms (mRNA, DNA) going to production, massive deployment of rapid tests, and the unprecedented use of digital contact tracing.
(Note that many other opportunities which opened up were not acted on.)
Taking advantage of such opportunities may depend on factors such as “do we have a relevant policy proposal in the drawer?”, “do we have a team of experts able to advise?” or “do we have a relevant network?”. These can be prepared in advance.
Default example for humanity thinking about large-scale risk
COVID will likely become the go-to example of a large-scale, seemingly low-probability risk we were unprepared for. The ability to shape narratives and attention around COVID could be important for the broader problem of how humanity should deal with other such risks.
While there is a clear philosophical distinction between existential risks and merely catastrophic risks, 1) in practice it may be difficult to tell the ultimate scale of some risks, and 2) most people will not understand the distinction between GCRs and x-risks in an intuitive way (understanding both as merely “extremely large”). So narratives and research surrounding GCRs are important for work on x-risk.
Conclusion
The reasons above are why it made sense to pay attention to COVID, even if the pandemic’s direct impact on the trajectory of humanity is small. (In some ways it still makes sense to pay attention.)
The broader conclusion is that longtermists’ ability to observe, orient themselves, decide and act during crises may be critical to influencing long-term outcomes.
The usual ontology of longtermist interventions partitions the space according to “cause areas” or “risks”, leaving room for the unknown “cause X”. An alternative, almost orthogonal view partitions interventions according to the time scale of the OODA loop (i.e. the decision and action process) they implement.
On this view, longtermism has so far focussed on actions in the top row: those with OODA loops on the horizon of years and decades. Typical examples might be writing books that fix the basic framing of a field, basic research, or community building.
While there is a lot of commonality in actions along a column (e.g. at all timescales, the AI risk field will want to do AI research), there is also a lot that would be common across a row (e.g. all cause areas will need to know how government may pass emergency regulation on a timescale of days).
The skills and capabilities needed to act on a scale of months, weeks, or days seem relatively undeveloped. The following posts will make specific suggestions for what to improve in this regard, based on our experience with COVID—in particular the rather obvious suggestion of creating a longtermist “emergency response team” devoted to fast action.
At the same time, I suggest taking this framing as a prompt: what else are we not doing? Where else is the table filled less than it should be?
- ^
I worked on the COVID crisis at the expense of working directly on AI alignment and macrostrategy at FHI, which is a very high bar.
I just came across Sammy Martin’s June 2020 post covering similar considerations and discussing the world’s control system. Honour to him!
> The Morituri Nolumus Mori effect, as a reminder, is the thesis that governments and individuals have a consistent, short-term reaction to danger which is stronger than many of us suspected, though not sustainable in the absence of an imminent threat.
Bunch of prior art with the same idea
Sam Hilton
Sammy Martin
Tyler Alterman
If your goal is very hard to achieve then you could also have an extraordinarily high impact in situations where things are working unusually well. In a well-functioning organization or government, any individual’s reach is higher than it would be in a more poorly functioning organization.
Frustrating that the lines between the rows in that table kind of shift and wiggle, because of the various kinds of “feedback”, and their unclear relative importance and meaning.
The canonical example is Einstein getting special relativity basically perfectly right with almost no reality-feedback, because he had math-feedback. He started with a good amount of data, but so do we.
In AI research & bio we are blessed with several kinds of useful feedback at several timescales, although the ultimate review hasn’t come yet.
I’m tempted to be rude and say your first post should be titled “tips for interacting with large orgs”. I may be misunderstanding you, so that comment isn’t really warranted. If you did title it that, though, I would be just as interested.
For planes, a hill had enough feedback. For masks, a kitchen spray faucet is maybe enough if you’re honest with yourself. The US military gets mountains of data about its operations and their failure causes but whoever is running things might do better having a date night with their diary than having a presentation from their intelligence officers. I don’t think data/experiments are the big missing piece across the board. In policy of course it is about practice and structures and connections 95%...
So all this to say: there are most likely big ways we can get more feedback on all our long-term efforts, and we certainly ought to, but I expect that this advice will need to be extremely specific to be useful, that people are already trying very hard all the time to get meaningful data, and that just saying “moar experiments” won’t get us far.
Sorry, but this seems to me to confuse the topic of the post “Experimental Longtermism” and the topic of this post. Note that the posts are independent, and about different topics.
The table in this post is about timescales of OODA loops (observe–orient–decide–act), not about feedback. For example, in a situation which is unfolding on a timescale of days and weeks, as the early response to COVID was, some actions are just too slow to have an impact: for example, writing a book, or funding basic research. The same is true for some decision and observation processes: the speed of the whole loop matters, and if the decide part needs, for example, 50 people from 5 different organizations to convene and reach a consensus, it won’t work.
The first O in OODA implies something new to observe, no? And within the OODA loop of a city there are many smaller loops, where e.g. you see whether your friend wears a mask if you ask them.
And with the ToC and such, I thought the first post was kind of an introduction/abstract.
Anyway, I’m looking forward to these posts, and very curious what the OODA loop of a city looks like.